Results 1 - 4 of 4
1.
Front Oncol ; 14: 1347856, 2024.
Article in English | MEDLINE | ID: mdl-38454931

ABSTRACT

With over 2.1 million new cases diagnosed annually, the incidence and mortality of breast cancer (BC) pose a severe global health burden for women. Identifying the disease early is the only practical way to lessen its impact. Numerous studies have developed automated methods that use different medical imaging modalities to identify BC, but the precision of each strategy varies with the available resources, the nature of the problem, and the dataset used. We propose a novel deep bottleneck convolutional neural network with a quantum optimization algorithm for breast cancer classification and diagnosis from mammogram images. Two novel deep architectures, a three-residual-block bottleneck and a four-residual-block bottleneck, are proposed with parallel and single paths. Bayesian Optimization (BO) is employed to initialize hyperparameter values and train the architectures on the selected dataset. Deep features are extracted from the global average pooling layer of both models. A kernel-based canonical correlation analysis and entropy technique is then proposed to fuse the extracted deep features. The fused feature set is further refined using an optimization technique named quantum generalized normal distribution optimization. The selected features are finally classified using several neural network classifiers, such as bi-layered and wide neural networks. Experiments were conducted on a publicly available mammogram imaging dataset named INbreast, and a maximum accuracy of 96.5% was obtained. For the proposed method, the sensitivity is 96.45%, the precision is 96.5%, the F1 score is 96.64%, the MCC is 92.97%, and the Kappa value is 92.97%. The proposed architectures are further utilized for the diagnosis of infected regions. In addition, a detailed comparison with several recent techniques shows the proposed framework's higher accuracy and precision.
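The paper's kernel-CCA-with-entropy fusion and quantum generalized normal distribution optimization are not publicly specified, so the following is only a minimal sketch of the general shape of such a pipeline: deep features standing in for the two backbones' global-average-pooling activations are projected into a shared subspace with scikit-learn's standard CCA (a stand-in for the kernel variant), serially fused, and classified with a two-hidden-layer ("bi-layered") network. All shapes, data, and settings are illustrative assumptions, not the authors' implementation.

import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Placeholder deep features, standing in for activations taken from the
# global average pooling layer of two trained bottleneck backbones.
n_samples = 400
feats_a = rng.normal(size=(n_samples, 256))
feats_b = rng.normal(size=(n_samples, 256))
labels = rng.integers(0, 2, size=n_samples)  # dummy benign/malignant labels

# Project both feature sets into a shared 32-D correlated subspace
# (standard CCA used here in place of the paper's kernel-based variant).
cca = CCA(n_components=32, max_iter=1000)
proj_a, proj_b = cca.fit_transform(feats_a, feats_b)
fused = np.concatenate([proj_a, proj_b], axis=1)  # simple serial fusion

X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.2,
                                          random_state=0)

# Two-hidden-layer ("bi-layered") network as the final classifier.
clf = MLPClassifier(hidden_layer_sizes=(100, 100), max_iter=500,
                    random_state=0)
clf.fit(X_tr, y_tr)
print("hold-out accuracy:", clf.score(X_te, y_te))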

2.
Biomimetics (Basel) ; 8(5)2023 Sep 19.
Article in English | MEDLINE | ID: mdl-37754189

ABSTRACT

In recent years, plant disease outbreaks have posed continuous threats to agriculture and caused substantial economic losses, so early detection and classification could minimize the spread of disease and help improve yield. Deep learning has emerged as a significant approach to detecting and classifying images, but classification with deep learning mainly relies on large datasets to prevent overfitting. The Automatic Segmentation and Hyper-Parameter Optimization Artificial Rabbits Algorithm (AS-HPOARA) is developed to overcome these issues and improve plant leaf disease classification. The Plant Village dataset is used to assess the proposed AS-HPOARA approach. Z-score normalization is performed to normalize the images using the dataset's mean and standard deviation. Three augmentation techniques, rotation, scaling, and translation, are used to balance the training images; augmenting the images before classification reduces overfitting and improves classification accuracy. A modified UNet, which employs a larger number of fully connected layers to better represent deeply buried characteristics, is used for segmentation. The images are converted from one domain to another in a paired manner, and classification is then performed by the HPO-based ARA, which increases the training data and eliminates statistical bias to improve classification accuracy; tuning the hyperparameters minimizes model complexity and further reduces overfitting. Accuracy, precision, recall, and F1 score are used to analyze AS-HPOARA's performance. Compared with the existing CGAN-DenseNet121 and RAHC_GAN, the reported results show that the accuracy of AS-HPOARA for ten classes reaches 99.7%.
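As a concrete illustration of the preprocessing described above, the sketch below applies per-channel Z-score normalization with assumed dataset statistics plus the three stated augmentations (rotation, scaling, translation) using torchvision. The mean/std values, image size, and augmentation ranges are illustrative assumptions, not the paper's settings.

from torchvision import transforms

# Assumed per-channel statistics computed over a Plant Village training split.
dataset_mean = [0.47, 0.49, 0.41]
dataset_std = [0.18, 0.16, 0.19]

train_transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.RandomRotation(degrees=15),                      # rotation
    transforms.RandomAffine(degrees=0,
                            translate=(0.1, 0.1),               # translation
                            scale=(0.9, 1.1)),                  # scaling
    transforms.ToTensor(),
    transforms.Normalize(mean=dataset_mean, std=dataset_std),   # per-channel Z-score
])

# Usage (assumes an ImageFolder-style copy of the Plant Village dataset):
# from torchvision.datasets import ImageFolder
# train_set = ImageFolder("plant_village/train", transform=train_transform)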

3.
Sensors (Basel) ; 23(6)2023 Mar 10.
Article in English | MEDLINE | ID: mdl-36991714

ABSTRACT

BACKGROUND: Continuous monitoring helps people with diabetes live better lives. A wide range of technologies, including the Internet of Things (IoT), modern communications, and artificial intelligence (AI), can help lower the cost of health services, and the many available communication systems now make customized and remote healthcare possible. MAIN PROBLEM: Healthcare data grow daily, making storage and processing challenging. We provide intelligent healthcare structures for smart e-health apps to address this problem. The 5G network must offer advanced healthcare services to meet important requirements such as large bandwidth and excellent energy efficiency. METHODOLOGY: This research proposes an intelligent machine learning (ML) based system for tracking diabetic patients. The architecture comprises smartphones, sensors, and smart devices that gather body measurements. The collected data are preprocessed and normalized, and linear discriminant analysis (LDA) is used to extract features. To establish a diagnosis, the intelligent system classifies the data using the proposed advanced-spatial-vector-based Random Forest (ASV-RF) in conjunction with particle swarm optimization (PSO). RESULTS: Compared with other techniques, the simulation results demonstrate that the proposed approach offers greater accuracy.


Subjects
Diabetes Mellitus, Telemedicine, Humans, Artificial Intelligence, Machine Learning, Patient Identification Systems
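The ASV-RF classifier and its PSO tuning are not publicly specified, so the sketch below only mirrors the general shape of the pipeline: LDA feature extraction followed by a standard scikit-learn Random Forest on synthetic stand-in data. The data, feature counts, and forest size are assumptions for illustration.

from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for the normalized body-sensor measurements.
X, y = make_classification(n_samples=500, n_features=20, n_informative=8,
                           n_classes=2, random_state=0)

# LDA projects onto at most (n_classes - 1) discriminant axes, i.e. a
# single axis for a binary diabetic/non-diabetic label; the Random Forest
# then classifies the projected features.
pipeline = make_pipeline(
    LinearDiscriminantAnalysis(n_components=1),
    RandomForestClassifier(n_estimators=200, random_state=0),
)

scores = cross_val_score(pipeline, X, y, cv=5)
print("mean CV accuracy:", scores.mean())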
4.
Diagnostics (Basel) ; 13(3)2023 Feb 01.
Article in English | MEDLINE | ID: mdl-36766640

ABSTRACT

Malaria is predominant in many subtropical nations with little health-monitoring infrastructure, so time series prediction models are needed to forecast malaria and reduce the disease's impact on the population. The conventional technique for detecting malaria is for certified technicians to visually examine blood smears for parasite-infected red blood cells (RBCs) under a microscope. This procedure is ineffective, and the diagnosis depends on the experience of the individual performing the test. Automatic image identification systems based on machine learning have previously been used to diagnose malaria from blood smears, but their practical performance has so far been insufficient. In this paper, we analyze the performance of deep learning algorithms in the diagnosis of malaria, using neural network models including a CNN, MobileNetV2, and ResNet50. The dataset was obtained from the National Institutes of Health (NIH) website and consists of 27,558 images: 13,780 parasitized cell images and 13,778 uninfected cell images. The MobileNetV2 model performed best, achieving an accuracy of 97.06%. Other metrics, including training and testing loss, precision, recall, F1 score, and the ROC curve, were also calculated to validate the considered models.
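For readers who want to reproduce the general setup, the sketch below shows a typical MobileNetV2 transfer-learning configuration for the binary parasitized/uninfected task: the pretrained feature extractor is frozen and the classifier head is replaced with a 2-way layer. The training step, batch, and hyperparameters are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn
from torchvision import models

# Load MobileNetV2 (downloads ImageNet weights on first use).
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)

# Freeze the pretrained feature extractor and replace the final layer
# with a 2-way head (parasitized vs. uninfected cell images).
for param in model.features.parameters():
    param.requires_grad = False
model.classifier[1] = nn.Linear(model.last_channel, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)

# One illustrative training step on a dummy batch of 224x224 RGB cell crops.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("loss:", loss.item())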
